The ability to form images of changing non-line-of-sight (NLOS) scenes could be transformative in a variety of domains, including search and rescue, autonomous vehicle navigation, and reconnaissance. Most existing active NLOS methods illuminate the hidden scene with a pulsed laser directed at a relay surface and collect time-resolved measurements of the returning light. Prevalent approaches include raster scanning of a rectangular grid on a vertical wall opposite the volume of interest to generate a collection of confocal measurements. These are inherently limited by the need for laser scanning. Methods that avoid laser scanning treat the moving parts of the hidden scene as one or two point targets. In this work, based on more complete optical response modeling, yet still without multiple illumination positions, we demonstrate accurate reconstruction of objects in motion and a 'map' of the stationary scenery behind them. The ability to count, localize, and characterize the sizes of hidden objects in motion, combined with mapping of the stationary hidden scene, could greatly improve indoor situational awareness in a variety of applications.
Periocular refers to the region of the face that surrounds the eye socket. This is a feature-rich area that can be used on its own to determine the identity of an individual. It is especially useful when the iris or the face cannot be reliably acquired, as in unconstrained or uncooperative scenarios where the face may appear partially occluded or the subject-to-camera distance may be large. The region has also received revived attention during the pandemic due to masked faces, which leave the ocular region as the only visible facial area even in controlled scenarios. This paper discusses the state of the art in periocular biometrics, giving an overall framework of its most significant research aspects.
Biometrics is the science of identifying individuals based on their intrinsic anatomical or behavioural characteristics, such as fingerprints, face, iris, gait, and voice. Iris recognition is one of the most successful methods because it exploits the rich texture of the human iris, which is unique even between twins and does not degrade with age. Modern approaches to iris recognition use deep learning to segment the valid portion of the iris from the rest of the eye so that it can be encoded, stored, and compared. This paper aims to improve the accuracy of iris semantic segmentation systems by introducing a novel data augmentation technique. Our method can transform an iris image with a given dilation level into any desired dilation level, thus augmenting the variability and the number of training examples obtainable from a small dataset. The proposed method is fast and does not require training. The results indicate that our data augmentation method can improve segmentation accuracy by up to 15% for images with high pupil dilation, yielding a more reliable iris recognition pipeline even under extreme dilation.
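As an illustration of the kind of augmentation described, the sketch below synthesises a new pupil dilation level by radially remapping the iris annulus. It assumes concentric circular pupil and iris boundaries with known centre and radii (which would come from a prior segmentation step) and a grayscale image; it is not the authors' implementation.

```python
# Hedged sketch: change the apparent pupil dilation of a grayscale iris image
# by remapping the annulus between the pupil and the limbus along each ray.
import numpy as np
from scipy.ndimage import map_coordinates

def change_dilation(img, cx, cy, r_pupil, r_iris, r_pupil_new):
    """Return a copy of `img` whose pupil radius becomes r_pupil_new.
    The iris texture between pupil and limbus is stretched or compressed;
    the newly exposed pupil region is filled with the darkest pupil value."""
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w].astype(np.float64)
    dx, dy = xx - cx, yy - cy
    r = np.hypot(dx, dy)
    theta = np.arctan2(dy, dx)

    # Map output radii in [r_pupil_new, r_iris] back to source radii in
    # [r_pupil, r_iris] so each normalized iris coordinate keeps its texture.
    t = (r - r_pupil_new) / max(r_iris - r_pupil_new, 1e-6)
    r_src = r_pupil + t * (r_iris - r_pupil)
    inside_iris = (r >= r_pupil_new) & (r <= r_iris)
    r_sample = np.where(inside_iris, r_src, r)      # leave pixels outside the annulus untouched

    src_x = cx + r_sample * np.cos(theta)
    src_y = cy + r_sample * np.sin(theta)
    out = map_coordinates(img.astype(np.float64), [src_y, src_x], order=1, mode='nearest')

    # Crude fill for the (possibly enlarged) pupil disc.
    pupil_mask = r <= r_pupil
    out[r < r_pupil_new] = img[pupil_mask].min() if np.any(pupil_mask) else 0
    return out.astype(img.dtype)
```

Applying this with several values of r_pupil_new to each training image is what multiplies the number and variability of examples without collecting new data.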
Iris segmentation is the initial step in identifying the biometrics of animals to establish a traceability system for livestock. In this study, we propose a novel deep learning framework for pixel-wise segmentation with minimal use of annotation labels, using the public BovineAAEyes80 dataset. In the experiments, U-Net with a VGG16 backbone was selected as the best combination of encoder and decoder, achieving 99.50% accuracy and a 98.35% Dice coefficient score. Remarkably, the selected model accurately segmented corrupted images even without proper annotation data. This study contributes to the advancement of iris segmentation and the development of a reliable DNN training framework.
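A minimal sketch of the selected encoder/decoder combination follows, assuming PyTorch and torchvision; the decoder widths, input size, and two-class (iris/background) head are illustrative choices, not the exact architecture used in the study.

```python
# Hedged sketch: a U-Net-style decoder attached to a VGG16 encoder for
# pixel-wise binary segmentation.
import torch
import torch.nn as nn
from torchvision.models import vgg16

class VGG16UNet(nn.Module):
    def __init__(self, n_classes=2, pretrained=False):
        super().__init__()
        feats = vgg16(weights="IMAGENET1K_V1" if pretrained else None).features
        # Encoder stages of VGG16, split just before each max-pooling layer.
        self.enc1 = feats[:4]     # 64 channels,  full resolution
        self.enc2 = feats[4:9]    # 128 channels, 1/2
        self.enc3 = feats[9:16]   # 256 channels, 1/4
        self.enc4 = feats[16:23]  # 512 channels, 1/8
        self.enc5 = feats[23:30]  # 512 channels, 1/16

        def up(in_ch, skip_ch, out_ch):
            return nn.ModuleDict({
                "up": nn.ConvTranspose2d(in_ch, out_ch, 2, stride=2),
                "conv": nn.Sequential(
                    nn.Conv2d(out_ch + skip_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
                    nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True)),
            })
        self.dec4 = up(512, 512, 256)
        self.dec3 = up(256, 256, 128)
        self.dec2 = up(128, 128, 64)
        self.dec1 = up(64, 64, 32)
        self.head = nn.Conv2d(32, n_classes, 1)

    def forward(self, x):
        s1 = self.enc1(x)
        s2 = self.enc2(s1)
        s3 = self.enc3(s2)
        s4 = self.enc4(s3)
        s5 = self.enc5(s4)
        d = s5
        for dec, skip in ((self.dec4, s4), (self.dec3, s3), (self.dec2, s2), (self.dec1, s1)):
            d = dec["up"](d)                          # upsample
            d = dec["conv"](torch.cat([d, skip], 1))  # fuse with encoder skip connection
        return self.head(d)                           # per-pixel class logits

model = VGG16UNet()
logits = model(torch.randn(1, 3, 224, 224))           # -> (1, 2, 224, 224)
```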
Periocular recognition has gained attention recently due to demands for increased robustness of face or iris recognition in less controlled scenarios. We present a new system for eye detection based on complex symmetry filters, which has the advantage of not needing training. In addition, the separability of the filters allows faster detection via one-dimensional convolutions. This system is used as input to a periocular algorithm based on retinotopic sampling grids and Gabor spectrum decomposition. The evaluation framework comprises six databases acquired with both near-infrared and visible sensors. The experimental setup is complemented with four iris matchers, used for fusion experiments. The eye detection system shows very high accuracy with near-infrared data and reasonably good accuracy with one visible-light database. The periocular system exhibits great robustness to small errors in locating the eye centre, as well as to scale changes of the input image. The density of the sampling grid can also be reduced without sacrificing accuracy. Lastly, despite the poorer performance of the iris matchers with visible data, fusion with the periocular system can provide an improvement of more than 20%. The six databases used have been manually annotated, and the annotation is made publicly available.
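The speed argument for separable filters can be made concrete with a small sketch: a 2D kernel that factors into an outer product of two 1D kernels gives identical results when applied as two 1D convolutions, at a fraction of the cost per pixel. The Gaussian-derivative kernels below are generic stand-ins, not the paper's complex symmetry filters.

```python
# Hedged sketch: separable filtering. For an N x N kernel, two 1D passes cost
# O(2N) multiplications per pixel instead of O(N^2) for the direct 2D case.
import numpy as np
from scipy.signal import convolve2d, convolve

def separable_filter(img, kx, ky):
    """Apply the separable kernel np.outer(ky, kx) via two 1D convolutions."""
    rows = np.apply_along_axis(lambda r: convolve(r, kx, mode='same'), 1, img)
    return np.apply_along_axis(lambda c: convolve(c, ky, mode='same'), 0, rows)

# A Gaussian and its first derivative (building blocks of symmetry-derivative
# filters) are both separable.
x = np.arange(-7, 8)
sigma = 2.0
g = np.exp(-x**2 / (2 * sigma**2)); g /= g.sum()
dg = -x / sigma**2 * g

img = np.random.rand(128, 128)
full = convolve2d(img, np.outer(g, dg), mode='same')   # direct 2D convolution
fast = separable_filter(img, dg, g)                    # two 1D convolutions
assert np.allclose(full, fast)                         # same result, cheaper
```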
In this study the authors will look at the detection and segmentation of the iris and its influence on the overall performance of the iris-biometric tool chain. The authors will examine whether segmentation accuracy, based on conformance with a ground truth, can serve as a predictor of the overall performance of the iris-biometric tool chain. That is: if the segmentation accuracy is improved, will this always improve the overall performance? Furthermore, the authors will systematically evaluate the influence of the segmentation parameters, the pupillary and limbic boundaries and the normalisation centre (based on Daugman's rubber-sheet model), on the rest of the iris-biometric tool chain. The authors will investigate whether accurately finding these parameters is important and how consistency, that is, extracting exactly the same region of the iris during segmentation, influences the overall performance.
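For reference, here is a minimal sketch of the rubber-sheet normalisation referred to above, assuming circular pupillary and limbic boundaries; production systems additionally handle non-concentric boundaries and occlusion masks.

```python
# Hedged sketch: Daugman rubber-sheet unwrapping of the iris annulus into a
# fixed-size rectangle, parameterised by the boundaries the study varies.
import numpy as np
from scipy.ndimage import map_coordinates

def rubber_sheet(img, pupil_xy, pupil_r, iris_xy, iris_r, n_radial=64, n_angular=256):
    """Unwrap a grayscale iris image into an n_radial x n_angular block."""
    theta = np.linspace(0, 2 * np.pi, n_angular, endpoint=False)
    rho = np.linspace(0, 1, n_radial)
    rho, theta = np.meshgrid(rho, theta, indexing='ij')

    # Linearly interpolate between the pupillary and limbic boundary points
    # along each radius: (1 - rho) * pupil_point + rho * limbus_point.
    xp = pupil_xy[0] + pupil_r * np.cos(theta)
    yp = pupil_xy[1] + pupil_r * np.sin(theta)
    xi = iris_xy[0] + iris_r * np.cos(theta)
    yi = iris_xy[1] + iris_r * np.sin(theta)
    x = (1 - rho) * xp + rho * xi
    y = (1 - rho) * yp + rho * yi
    return map_coordinates(img.astype(np.float64), [y, x], order=1, mode='nearest')
```

Shifting the assumed boundaries or normalisation centre by a few pixels changes which texture lands in each (rho, theta) cell, which is exactly the consistency effect the study examines.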
Authentication systems are vulnerable to model inversion attacks, in which an adversary is able to approximate the inverse of a target machine learning model. Biometric models are prime candidates for this type of attack, because inverting a biometric model allows the attacker to produce realistic biometric inputs with which to spoof biometric authentication systems. One of the main constraints on conducting a successful model inversion attack is the amount of training data required. In this work, we focus on iris and facial biometric systems and propose a new technique that drastically reduces the amount of training data necessary. By leveraging the outputs of multiple models, we are able to conduct model inversion attacks with 1/10 the training set size of Ahmad and Fuller (IJCB 2020) for iris data and 1/1000 the training set size of Mai et al. (Pattern Analysis and Machine Intelligence 2019) for face data. We denote the new attack technique as structured random with alignment loss. Our attacks are black-box, requiring no knowledge of the weights of the target neural network, only the dimension and values of the output vector. To show the versatility of the alignment loss, we apply our attack framework to the task of membership inference (Shokri et al., IEEE S&P 2017) on biometric data. For iris data, membership inference attacks against classification networks improve from 52% to 62% accuracy.
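To make the black-box setting concrete, the sketch below shows a generic training-based inversion, in which the attacker queries the target model and fits a decoder from output vectors back to inputs. It is not the paper's structured-random/alignment-loss technique, and all dimensions, names, and the stand-in target are illustrative.

```python
# Hedged sketch: black-box model inversion via a learned decoder. The attacker
# never sees the target's weights, only its output vectors.
import torch
import torch.nn as nn

EMB_DIM, IMG = 128, 64                        # assumed embedding size and image side

decoder = nn.Sequential(                      # maps f(x) -> reconstructed image
    nn.Linear(EMB_DIM, 512), nn.ReLU(),
    nn.Linear(512, IMG * IMG), nn.Sigmoid())
opt = torch.optim.Adam(decoder.parameters(), lr=1e-3)

def black_box(x):
    """Stand-in for the target model: only its output vector is observable."""
    with torch.no_grad():
        return torch.randn(x.shape[0], EMB_DIM)   # replace with real queries to f

attacker_images = torch.rand(256, IMG * IMG)      # attacker's own small dataset
for _ in range(100):
    idx = torch.randint(0, attacker_images.shape[0], (32,))
    x = attacker_images[idx]
    emb = black_box(x)                            # query outputs; no gradients from f
    loss = nn.functional.mse_loss(decoder(emb), x)
    opt.zero_grad(); loss.backward(); opt.step()

# At attack time, a leaked output vector v is inverted with decoder(v).
```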
This study evaluates the discriminating capacity (uniqueness) of EEG data from the public WAY-EEG-GAL dataset for authenticating individuals against one another, as well as its persistence. In addition to EEG data, Luciw et al. provide EMG (electromyography) and kinematics data so that engineers and researchers can utilize EEG-GAL for further studies; evaluating the EMG and kinematics data, however, is outside the scope of this study. The dataset's original purpose was to determine whether EEG data can be used to control prosthetic devices. This study, in contrast, aims to evaluate the separability of individuals through their EEG data in order to perform user authentication. A feature importance algorithm is used to select the best features for each user to verify them against the others. The authentication platform implemented in this study is based on machine learning models/classifiers. As an initial test, two preliminary studies were conducted using Linear Discriminant Analysis (LDA) and Support Vector Machines (SVM) to observe the models' learning trends on the multi-labelled EEG dataset. KNN was first used as the classifier for user authentication, and an accuracy of approximately 75% was observed. Thereafter, linear and non-linear SVMs were used to improve performance, achieving overall average accuracies of 85.18% and 86.92%, respectively. In addition to accuracy, F1 scores were calculated: the overall average F1 scores were 87.51% and 88.94% for the linear and non-linear SVMs, respectively. Beyond the overall performance, high-performing individuals were also observed, reaching 97.4% accuracy (97.3% F1 score) with the linear SVM and a 95.3% F1 score with the non-linear SVM.
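A minimal sketch of the per-user verification protocol described above (binary genuine-vs-impostor classification with importance-based feature selection) is given below, using scikit-learn on synthetic stand-in data; the ranking model and parameters are assumptions, not the study's exact pipeline.

```python
# Hedged sketch: one verifier per user, trained on features selected by an
# importance ranking, evaluated with accuracy and F1 score.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, f1_score

def train_user_verifier(X, y, user_id, n_features=20, kernel='rbf'):
    genuine = (y == user_id).astype(int)              # 1 = target user, 0 = everyone else
    X_tr, X_te, g_tr, g_te = train_test_split(
        X, genuine, test_size=0.3, stratify=genuine, random_state=0)

    # Importance-based selection of the most discriminative features.
    rank = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, g_tr)
    top = np.argsort(rank.feature_importances_)[::-1][:n_features]

    clf = SVC(kernel=kernel)                          # 'linear' for the linear variant
    clf.fit(X_tr[:, top], g_tr)
    pred = clf.predict(X_te[:, top])
    return accuracy_score(g_te, pred), f1_score(g_te, pred), top

# Synthetic stand-in data: 500 samples, 64 EEG-derived features, 10 users.
X = np.random.randn(500, 64)
y = np.random.randint(0, 10, 500)
acc, f1, selected = train_user_verifier(X, y, user_id=3)
```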
Cancer is the second leading cause of death in the world, and diagnosing it early can save many lives. Pathologists have to examine tissue microarray (TMA) images manually to identify tumours, which can be time-consuming, inconsistent, and subjective. Existing algorithms for automatic tumour detection either have not reached the accuracy level of a pathologist or require substantial human involvement. A major challenge is that TMA images with different shapes, sizes, and locations can have the same score. Learning the staining patterns in TMA images requires a large number of images, which are severely limited due to privacy concerns and the regulations of medical organizations. TMA images from different cancer types may share common characteristics that could provide valuable information, but using them directly harms accuracy. By extracting knowledge from tissue images of other cancer types, transfer learning is employed to increase the effective training sample size, making it possible for the algorithm to break a critical accuracy barrier. The proposed algorithm reports an accuracy of 75.9% on breast cancer TMA images from the Stanford Tissue Microarray Database, reaching the 75% accuracy level of pathologists. This would allow pathologists to confidently use automatic algorithms to help them identify tumours in real time with higher accuracy.
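The transfer-learning idea can be sketched as follows: freeze a backbone that carries knowledge learned from other image data and fine-tune only a new classification head on the small breast-cancer TMA set. The ResNet-18 backbone, the four-class head, and the data loader below are placeholders, not the paper's configuration.

```python
# Hedged sketch: transfer learning for TMA scoring with a frozen pretrained
# backbone and a small trainable head.
import torch
import torch.nn as nn
from torchvision import models

# The pretrained ResNet-18 stands in for a model trained on tissue images
# from other cancer types.
net = models.resnet18(weights="IMAGENET1K_V1")
for p in net.parameters():
    p.requires_grad = False                    # keep the transferred representation
net.fc = nn.Linear(net.fc.in_features, 4)      # e.g. 4 TMA score classes (assumed)

opt = torch.optim.Adam(net.fc.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

def fine_tune(loader, epochs=5):
    """Train only the new head on small labelled breast-cancer TMA batches."""
    net.train()
    for _ in range(epochs):
        for images, scores in loader:          # placeholder DataLoader of (image, score) pairs
            opt.zero_grad()
            loss = loss_fn(net(images), scores)
            loss.backward()
            opt.step()
```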
This research proposes a new database and method to detect reduced alertness conditions caused by alcohol, drug consumption, and drowsiness from near-infrared (NIR) periocular eye images. The study focuses on determining the effect of external factors on the central nervous system (CNS). The goal is to analyze how this affects iris and pupil movement behaviour and whether these changes can be classified with a standard NIR iris capture device. This paper proposes a modified MobileNetV2 to classify iris NIR images taken from subjects under the influence of alcohol, drugs, or sleepiness. The results show that the MobileNetV2-based classifier can detect the unfit condition from iris samples captured after alcohol and drug consumption with detection accuracies of 91.3% and 99.1%, respectively. The sleepiness condition is the most challenging, at 72.4%. For images grouped into the two classes, fit and unfit, the model reached accuracies of 94.0% and 84.0%, respectively, using a smaller number of parameters than standard deep learning network algorithms. This work is a step toward developing automatic biometric systems that classify 'fitness for duty' and help prevent accidents caused by alcohol or drug consumption and sleepiness.
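A minimal sketch of a MobileNetV2 classifier with a replaced head for the alertness classes is shown below, assuming PyTorch/torchvision; the specific changes of the paper's modified MobileNetV2 are not reproduced here, and the class counts are taken from the abstract.

```python
# Hedged sketch: torchvision MobileNetV2 with its classifier head replaced for
# four alertness classes (or two for the fit/unfit grouping).
import torch
import torch.nn as nn
from torchvision import models

def build_fitness_classifier(n_classes=4, pretrained=False):
    net = models.mobilenet_v2(weights="IMAGENET1K_V1" if pretrained else None)
    in_feats = net.classifier[1].in_features       # 1280 for MobileNetV2
    net.classifier = nn.Sequential(
        nn.Dropout(0.2),
        nn.Linear(in_feats, n_classes))            # control / alcohol / drugs / sleepy
    return net

model = build_fitness_classifier(n_classes=4)
logits = model(torch.randn(1, 3, 224, 224))        # NIR crop replicated to 3 channels
probs = torch.softmax(logits, dim=1)
```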